Qualitative measures show that an existing artificial neural network can perform invariant object recognition. Quantifying the performance of individual cells within this network, however, proves problematic.
In line with contemporary neurophysiological analyses (e.g. Optican and Richmond, 1987; Tovee et al., 1993), a simple form of Shannon's information theory was applied to this performance measurement task. The results, however, are shown not to be useful: the perfect reliability of artificial cell responses exposes the implicit decoding power assumed by pure Shannon information theory.
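The difficulty can be illustrated with a minimal sketch (not taken from the chapter; the function name and toy stimulus set are illustrative). A plug-in estimate of the Shannon mutual information I(S; R) between stimulus and response assigns a perfectly reliable cell the full information content of the stimulus set regardless of how the responses are coded, because mutual information is blind to the difficulty of decoding:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(S; R) in bits from (stimulus, response) pairs."""
    n = len(pairs)
    p_sr = Counter(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_r = Counter(r for _, r in pairs)
    return sum((c / n) * log2((c / n) / ((p_s[s] / n) * (p_r[r] / n)))
               for (s, r), c in p_sr.items())

# Four stimuli; a perfectly reliable cell gives each a distinct response.
# Even a scrambled, hard-to-decode response code carries the full 2 bits.
simple_code = [(s, s) for s in range(4)]                   # identity responses
scrambled_code = [(s, (3 * s + 1) % 4) for s in range(4)]  # permuted responses
print(mutual_information(simple_code))     # 2.0 bits
print(mutual_information(scrambled_code))  # 2.0 bits
```

Both codes measure at 2 bits, yet only the first is trivially decodable; this is the sense in which pure Shannon information presumes an arbitrarily powerful decoder.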
Refining the definition of cell performance in terms of usable Shannon information (Shannon information available in a “useful” form) leads to two novel performance measures. First, a cell's “information trajectory” quantifies standard information-theoretic performance across a range of decoders of increasing complexity: information made available by simple decoding is weighted more strongly than information available only through more complex decoding. Second, the nature of the application (the task the network attempts to solve) is used to design a decoder of appropriate complexity, yielding an exceptionally simple and reliable information-theoretic measure. Comparison of the various measures' performance in the original problem domain shows the superiority of the second novel measure.
The chapter concludes with the observation that reliable application of Shannon's information theory requires close attention to the form in which signals can be decoded: in short, not all measurable information is necessarily usable information.
Introduction
This chapter discusses an approach to performance measurement using information theory in the context of a model of invariant object recognition. Each of these terms is discussed in turn in the following sections.